AMD's AI chip challenge to Nvidia remains uphill fight
(Reuters) - Advanced Micro Devices Inc gave new details about an artificial intelligence chip that will challenge market leader Nvidia Corp, but the company left out what Wall Street wanted to know - who plans to buy it.
Santa Clara, California-based AMD said the forthcoming chip, which will begin shipping in small quantities in the third quarter before ramping to mass production in the fourth quarter, will have 192 gigabytes of memory.
That could help tech companies get a handle on the spiraling cost of delivering services similar to ChatGPT, AMD Chief Executive Lisa Su told Reuters in an interview. She spoke after a keynote presentation in San Francisco in which she showed the MI300X chip running an AI system that wrote a poem about the city.
"The more memory that you have, the larger the set of models" the chip can handle, Su said. "We've seen in customer workloads that it runs much faster. We really do think it's differentiating."
But unlike past presentations, in which AMD talked up a major customer for a new chip, the company did not say who will adopt the MI300X or a smaller version called the MI300A. Nor did it give details on how much the chip will cost or how it will bolster sales.
AMD's shares have doubled in price since the start of the year and touched a 16-month high earlier on Tuesday, but closed down 3.6% after the AI strategy presentation. Nvidia shares finished 3.9% higher at $410.22, making Nvidia the first chipmaker to close with a market capitalization above $1 trillion.
"I think the lack of a (large customer) saying they will use the MI300 A or X may have disappointed the Street. They want AMD to say they have replaced Nvidia in some design," said Kevin Krewell, principal analyst at TIRIAS Research.
Nvidia, whose shares have surged 170% so far this year, dominates the AI computing market with a market share of 80% to 95%, according to analysts.
Nvidia has few competitors working at a large scale. While Intel Corp and several startups such as Cerebras Systems and SambaNova Systems have competing products, Nvidia's biggest sales threat so far comes from the internal chip efforts at Alphabet Inc's Google and Amazon.com's cloud unit, both of which rent their custom chips to outside developers.
Aside from the AI market, AMD said it has started shipping high volumes of a general-purpose central processor chip called "Bergamo" to companies such as Meta Platforms.
Alexis Black Bjorlin, who oversees computing infrastructure at Facebook parent Meta, said the company has adopted the Bergamo chip, which targets a different part of AMD's data center business, one that caters to cloud computing providers and other large chip buyers.
But investors were searching for news on AI. Nvidia's lead there has come not only from its chips, but also from more than a decade of providing software tools to AI researchers and learning to anticipate what they will need in chips that take years to design.
AMD on Tuesday provided updates to its ROCm software, which competes with Nvidia's CUDA software platform.
Soumith Chintala, a Meta vice president who helped create open-source software for artificial intelligence, said during the presentation that he has worked closely with AMD to make it easier for AI developers to use free tools to switch from the "single dominating vendor" of AI chips to other offerings such as AMD's.
"You don't actually have to do that much work - or almost no work in a lot of cases - to go from one platform to the other," Chintala said.
But analysts said that even if sophisticated companies like Meta can wring good performance from AMD chips, that is no guarantee of broader market traction with less sophisticated buyers.
"People still aren't convinced that AMD's software solution is competitive with Nvidia's, even if it is competitive on the hardware performance side," said Anshel Sag, an analyst at Moor Insights & Strategy.